
    Guided Autonomy for Quadcopter Photography

    Photographing small objects with a quadcopter is non-trivial with many common user interfaces, especially when it requires maneuvering an Unmanned Aerial Vehicle (UAV) to difficult angles in order to shoot from high perspectives. The aim of this research is to employ machine learning to support better user interfaces for quadcopter photography. Human-Robot Interaction (HRI) is supported by visual servoing, a specialized vision system for real-time object detection, and control policies acquired through reinforcement learning (RL). Two investigations of guided autonomy were conducted. In the first, the user directed the quadcopter with a sketch-based interface, and periods of user direction were interspersed with periods of autonomous flight. In the second, the user directed the quadcopter by taking a single photo with a handheld mobile device, and the quadcopter autonomously flew to the requested vantage point. This dissertation focuses on the following problems: 1) evaluating different user interface paradigms for dynamic photography in a GPS-denied environment; 2) learning better Convolutional Neural Network (CNN) object detection models to achieve higher precision in detecting human subjects than currently available state-of-the-art fast models; 3) transferring learning from the Gazebo simulation into the real world; and 4) learning robust control policies using deep reinforcement learning to maneuver the quadcopter to multiple shooting positions with minimal human interaction.
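    The visual-servoing idea mentioned in the abstract can be sketched as a proportional controller that steers the quadcopter so a detected subject stays centered in the camera frame. This is a minimal illustration, not the dissertation's implementation; the function name, gain value, and the yaw/climb command interface are assumptions.

    ```python
    def servo_command(bbox_center, frame_size, gain=0.002):
        """Proportional visual-servoing step: produce yaw and climb rates
        that drive a detected subject's bounding-box center toward the
        image center (illustrative gains and command format)."""
        cx, cy = bbox_center
        half_w, half_h = frame_size[0] / 2, frame_size[1] / 2
        # Pixel error between the detection center and the image center.
        err_x, err_y = cx - half_w, cy - half_h
        yaw_rate = gain * err_x       # positive error: subject is to the right
        climb_rate = -gain * err_y    # image y grows downward, so negate
        return yaw_rate, climb_rate

    # A subject left of and above the image center yields a left yaw and a climb.
    yaw, climb = servo_command(bbox_center=(200, 150), frame_size=(640, 480))
    ```

    In a real system the bounding-box center would come from the CNN detector each frame, closing the loop between perception and control.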

    Intelligently Assisting Human-Guided Quadcopter Photography

    Drones are a versatile platform for both amateur and professional photographers, enabling them to capture photos that are impossible to shoot with ground-based cameras. However, when guided by inexperienced pilots, they have a high incidence of collisions, crashes, and poorly framed photographs. This paper presents an intelligent user interface for photographing objects that is robust against navigation errors and reliably collects high-quality photographs. By retaining the human in the loop, our system is faster and more selective than purely autonomous UAVs that employ simple coverage algorithms. The intelligent user interface operates in multiple modes, allowing the user either to control the quadcopter directly or to fly in a semi-autonomous mode around a target object in the environment. To evaluate the interface, users completed a dataset collection task in which they were asked to photograph objects from multiple views. Our sketch-based control paradigm facilitated task completion, reduced crashes, and was favorably reviewed by the participants.
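    The multi-view photography task described above (flying semi-autonomously around a target object to photograph it from several sides) can be illustrated by computing evenly spaced orbit waypoints around the target, each paired with a yaw that faces it. This is an illustrative sketch only; the waypoint tuple format and function name are assumptions, not the paper's interface.

    ```python
    import math

    def orbit_waypoints(target, radius, n_views, altitude):
        """Return n_views evenly spaced vantage points on a circle of the
        given radius around the target (x, y), each as (x, y, z, yaw)
        with yaw pointing the camera at the target."""
        tx, ty = target
        waypoints = []
        for k in range(n_views):
            theta = 2 * math.pi * k / n_views
            x = tx + radius * math.cos(theta)
            y = ty + radius * math.sin(theta)
            yaw = math.atan2(ty - y, tx - x)  # face the target object
            waypoints.append((x, y, altitude, yaw))
        return waypoints

    # Four views of an object at the origin from a 2 m radius at 1.5 m altitude.
    views = orbit_waypoints(target=(0.0, 0.0), radius=2.0, n_views=4, altitude=1.5)
    ```

    A semi-autonomous mode could fly these waypoints in sequence while the user remains free to interrupt or re-target, which matches the human-in-the-loop design the abstract argues for.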